
    Compressive Mining: Fast and Optimal Data Mining in the Compressed Domain

    Real-world data typically contain repeated and periodic patterns. This suggests that they can be effectively represented and compressed using only a few coefficients of an appropriate basis (e.g., Fourier, Wavelets, etc.). However, distance estimation when the data are represented using different sets of coefficients is still a largely unexplored area. This work studies the optimization problems related to obtaining the tightest lower/upper bound on Euclidean distances when each data object is potentially compressed using a different set of orthonormal coefficients. Our technique leads to tighter distance estimates, which translate into more accurate search, learning, and mining operations directly in the compressed domain. We formulate the problem of estimating lower/upper distance bounds as an optimization problem. We establish the properties of optimal solutions and leverage the theoretical analysis to develop a fast algorithm that obtains an exact solution to the problem. The suggested solution provides the tightest estimation of the L2-norm or the correlation. We show that typical data-analysis operations, such as k-NN search or k-Means clustering, operate more accurately using the proposed compression and distance-reconstruction technique. We compare it with many other prevalent compression and reconstruction techniques, including random projections and PCA-based techniques. We highlight a surprising result, namely that when the data are highly sparse in some basis, our technique may even outperform PCA-based compression. The contributions of this work are generic, as our methodology is applicable to any sequential or high-dimensional data as well as to any orthogonal data transformation used for the underlying data compression scheme.
    Comment: 25 pages, 20 figures, accepted in VLDB
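
    To make the bounding idea concrete, here is a minimal Python sketch, not the paper's exact optimization (which computes the provably tightest bounds): each object keeps its k largest orthonormal DFT coefficients plus the energy of everything it discarded, and a triangle-inequality argument then yields valid, if looser, lower/upper bounds on the Euclidean distance. The function names and the choice of basis are illustrative assumptions.

        import numpy as np

        def compress(x, k):
            # Orthonormal DFT so Parseval holds: ||x||^2 equals the coefficient energy.
            X = np.fft.fft(x, norm="ortho")
            idx = np.argsort(np.abs(X))[-k:]          # keep the k largest coefficients
            kept = {int(i): X[i] for i in idx}
            resid = np.sum(np.abs(X) ** 2) - sum(abs(v) ** 2 for v in kept.values())
            return kept, max(float(resid), 0.0)       # coefficients + discarded energy

        def distance_bounds(cx, ex, cy, ey):
            # Exact contribution on the indices both objects kept ...
            common = cx.keys() & cy.keys()
            d2 = sum(abs(cx[i] - cy[i]) ** 2 for i in common)
            # ... while elsewhere only each side's total remaining energy is known
            # exactly (kept-but-unshared coefficients plus the discarded residual).
            gx = sum(abs(cx[i]) ** 2 for i in cx.keys() - common) + ex
            gy = sum(abs(cy[i]) ** 2 for i in cy.keys() - common) + ey
            lo = np.sqrt(d2 + (np.sqrt(gx) - np.sqrt(gy)) ** 2)  # reverse triangle ineq.
            hi = np.sqrt(d2 + (np.sqrt(gx) + np.sqrt(gy)) ** 2)  # triangle inequality
            return lo, hi

    The true distance is guaranteed to lie in [lo, hi]; the paper's contribution is an exact algorithm for the tightest such interval when the two coefficient sets differ.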

    Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation

    Sensor networks potentially feature large numbers of nodes that can sense their environment over time, communicate with each other over a wireless network, and process information. They differ from data networks in that the network as a whole may be designed for a specific application. We study the theoretical foundations of such large-scale sensor networks, addressing four fundamental issues: connectivity, capacity, clocks, and function computation. To begin with, a sensor network must be connected so that information can indeed be exchanged between nodes. The connectivity graph of an ad-hoc network is modeled as a random graph, and the critical range for asymptotic connectivity is determined, as well as the critical number of neighbors that a node needs to connect to. Next, given connectivity, we address the issue of how much data can be transported over the sensor network. We present fundamental bounds on capacity under several models, as well as architectural implications for how wireless communication should be organized. Temporal information is important both for the applications of sensor networks and for their operation. We present fundamental bounds on the synchronizability of clocks in networks, and also present and analyze algorithms for clock synchronization. Finally, we turn to the issue of gathering the relevant information that sensor networks are designed to collect. One needs to study optimal strategies for in-network aggregation of data in order to reliably compute a composite function of sensor measurements, as well as the complexity of doing so. We address how such computation can be performed efficiently in a sensor network and present algorithms for doing so for some classes of functions.
    Comment: 10 pages, 3 figures, submitted to the Proceedings of the IEEE
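
    As a hedged illustration of the connectivity result (for n nodes placed uniformly at random, the critical transmission range scales so that pi * r^2 * n = log n + c, with asymptotic connectivity governed by whether c grows), the following sketch estimates the connection probability empirically; the node count, trial count, and c values are illustrative, not from the paper.

        import numpy as np

        def is_connected(points, r):
            # BFS on the geometric graph: nodes are adjacent iff within distance r.
            n = len(points)
            adj = np.linalg.norm(points[:, None] - points[None, :], axis=-1) <= r
            seen = np.zeros(n, dtype=bool)
            seen[0], stack = True, [0]
            while stack:
                for j in np.nonzero(adj[stack.pop()] & ~seen)[0]:
                    seen[j] = True
                    stack.append(j)
            return bool(seen.all())

        # Critical scaling: pi * r(n)^2 * n = log(n) + c; the network is connected
        # with probability tending to one iff c -> infinity.
        rng = np.random.default_rng(0)
        n, trials = 500, 50
        for c in (-2.0, 0.0, 4.0):
            r = np.sqrt((np.log(n) + c) / (np.pi * n))
            hits = sum(is_connected(rng.random((n, 2)), r) for _ in range(trials))
            print(f"c = {c:+.1f}: empirical P(connected) = {hits / trials:.2f}")

    Even at this moderate n, the sharp threshold should be visible: disconnection is common for negative c, while a modestly positive c makes connectivity the norm.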

    Recursive Compressed Sensing

    We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of (a) recursive encoding, where we sample the input stream via overlapping windowing and make use of the previous measurement in obtaining the next one, and (b) recursive decoding, where the signal estimate from the previous window is utilized to achieve faster convergence in the iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed, comprising support-set detection and signal-amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and by averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speedup of an order of magnitude over traditional compressed sensing, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
    Comment: Submitted to IEEE Transactions on Information Theory
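
    A minimal sketch of the recursive-decoding idea: warm-start the iterative solver with the previous window's estimate, shifted by the hop so the supports line up. The sketch uses plain ISTA as the LASSO decoder and simply re-measures each window rather than reusing parts of prior measurements, so the paper's recursive encoding is not reproduced; all sizes and the regularization weight are made-up assumptions.

        import numpy as np

        def ista(A, y, lam, x0, iters=100):
            # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
            step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant
            x = x0.copy()
            for _ in range(iters):
                g = x - step * (A.T @ (A @ x - y))    # gradient step on the quadratic
                x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
            return x

        rng = np.random.default_rng(0)
        n, m, hop, length = 128, 48, 16, 1024
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # fixed sensing matrix
        stream = np.zeros(length)                     # sparse synthetic stream
        stream[rng.choice(length, size=40, replace=False)] = rng.standard_normal(40)

        x_hat = np.zeros(n)
        for t in range(0, length - n + 1, hop):
            y = A @ stream[t:t + n]                   # measure the current window
            warm = np.roll(x_hat, -hop)               # shift estimate by the hop
            warm[-hop:] = 0.0                         # new samples start at zero
            x_hat = ista(A, y, lam=0.05, x0=warm)

    The warm start is what buys the reported speedup: after a shift of hop samples, most of the support is already correct, so far fewer iterations are needed than when decoding each window from scratch.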

    Communication-efficient distributed optimization with adaptability to system heterogeneity

    We consider the setting of agents cooperatively minimizing the sum of local objectives plus a regularizer on a graph. This paper proposes a primal-dual method in consideration of three distinctive attributes of real-life multi-agent systems, namely: (i) expensive communication, (ii) lack of synchronization, and (iii) system heterogeneity. Specifically, we propose a distributed asynchronous algorithm with minimal communication cost, in which users commit variable amounts of local work on their respective sub-problems. We illustrate this both theoretically and experimentally in the machine-learning setting, where the agents hold private data and use a stochastic Newton method as the local solver. Under standard assumptions on Lipschitz continuous gradients and strong convexity, our analysis establishes linear convergence in expectation and characterizes the dependency of the rate on the number of local iterations. We go a step further and propose a simple means for tuning agents' hyperparameters locally, so as to adjust to heterogeneity and accelerate the overall convergence. Lastly, we validate our proposed method on a benchmark machine-learning dataset to illustrate its merits in terms of computation, communication, and run-time savings, as well as its adaptability to heterogeneity.
    Comment: This paper is accepted by the 62nd IEEE Conference on Decision and Control (CDC 2023)
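
    To fix ideas, here is a hedged single-machine simulation of the "variable local work" attribute, written as a consensus-ADMM-style loop with inexact local updates; the paper's actual method is a primal-dual algorithm with stochastic Newton local solvers, which this sketch does not reproduce. All problem data, step sizes, and work budgets are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n_agents, dim, rho = 4, 5, 1.0
        # Synthetic local objectives f_i(x) = 0.5 * ||A_i x - b_i||^2.
        A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
        b = [rng.standard_normal(20) for _ in range(n_agents)]
        work = [1, 2, 5, 10]   # heterogeneity: per-agent local iteration budgets
        step = [1.0 / (np.linalg.norm(A[i], 2) ** 2 + rho) for i in range(n_agents)]

        x = [np.zeros(dim) for _ in range(n_agents)]  # local primal variables
        u = [np.zeros(dim) for _ in range(n_agents)]  # scaled dual variables
        z = np.zeros(dim)                             # consensus variable

        for k in range(200):
            for i in range(n_agents):
                # Inexact x-update: a few gradient steps instead of an exact solve.
                for _ in range(work[i]):
                    g = A[i].T @ (A[i] @ x[i] - b[i]) + rho * (x[i] - z + u[i])
                    x[i] = x[i] - step[i] * g
            z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)  # one round
            for i in range(n_agents):
                u[i] += x[i] - z                      # dual update

    Each outer iteration costs one communication round regardless of how much local work an agent commits, which is the sense in which extra local iterations trade computation for communication.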

    Distributed optimization on directed graphs based on inexact ADMM with partial participation

    We consider the problem of minimizing the sum of cost functions pertaining to agents over a network whose topology is captured by a directed graph (i.e., asymmetric communication). We cast the problem into the ADMM setting via a consensus constraint, for which both primal subproblems are solved inexactly. Specifically, the computationally demanding local minimization step is replaced by a single gradient step, while the averaging step is approximated in a distributed fashion. Furthermore, partial participation is allowed in the implementation of the algorithm. Under standard assumptions on strong convexity and Lipschitz continuous gradients, we establish linear convergence and characterize the rate in terms of the connectivity of the graph and the conditioning of the problem. Our line of analysis provides a sharper convergence rate compared to Push-DIGing. Numerical experiments corroborate the merits of the proposed solution in terms of superior rate as well as computation and communication savings over baselines.
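
    The "approximate averaging over a directed graph" ingredient can be illustrated with a push-sum sketch, a standard technique for averaging with column-stochastic weights; how the paper integrates it into inexact ADMM is not shown here, and the ring topology and weights below are illustrative assumptions.

        import numpy as np

        # Column-stochastic weights for a directed 4-cycle with self-loops:
        # agent j splits its mass between itself and its single out-neighbor.
        n = 4
        W = np.zeros((n, n))
        for j in range(n):
            W[j, j] = 0.5
            W[(j + 1) % n, j] = 0.5

        vals = np.array([1.0, 4.0, 2.0, 9.0])  # local quantities to average
        x, w = vals.copy(), np.ones(n)         # push-sum numerators and weights
        for _ in range(50):
            x, w = W @ x, W @ w                # one directed communication round
        print(x / w)                           # each entry -> mean(vals) = 4.0

    The ratio x/w converges to the exact average on any strongly connected digraph; truncating the loop after a few rounds gives only an approximate average, which is the flavor of inexact, distributed averaging the abstract refers to.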